
    WeakSATD: Detecting Weak Self-admitted Technical Debt

    Speeding up development may produce technical debt, i.e., not-quite-right code for which the effort to make it right increases with time, as a sort of interest. Developers may be aware of the debt, as they admit it in their code comments. The literature reports that such self-admitted technical debt survives for a long time in a program, but its long-term impact on code quality is not yet clear. We argue that self-admitted technical debt contains a number of different weaknesses that may affect the security of a program; therefore, the longer a debt remains unpaid, the higher the risk that the weaknesses can be exploited. To support our claim and raise developers' awareness of the vulnerability of self-admitted technical debt that is not paid back, we explore the self-admitted technical debt in the Chromium C code to detect any known weaknesses. In this preliminary study, we first mine the Common Weakness Enumeration repository to define heuristics for the automatic detection and fixing of weak code. Then, we parse the C code to find self-admitted technical debt and the code block it refers to. Finally, we use the heuristics to find weak code snippets associated with self-admitted technical debt and recommend their potential mitigation to developers. Such knowledge can be used to prioritize self-admitted technical debt for repair. A prototype has been developed and applied to the Chromium code. Initial findings report that 55% of self-admitted technical debt code contains weak code of 14 different types.
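
    The abstract does not include the prototype's heuristics, so the following is only a minimal sketch of the kind of pipeline it describes: a regex pass that detects self-admitted technical debt comments in C sources, followed by simple CWE-inspired rules that flag weak calls in the code block the comment refers to. The comment markers, the two example rules (strcpy/gets as stand-ins for CWE-120/CWE-242-style weaknesses), and the find_weak_satd helper are illustrative assumptions, not the authors' actual detector.

        import re

        # Hypothetical SATD markers commonly reported in the literature (TODO, FIXME, HACK, XXX).
        SATD_PATTERN = re.compile(r"//\s*(TODO|FIXME|HACK|XXX)", re.IGNORECASE)

        # Illustrative CWE-inspired heuristics: regex over C code -> (weakness id, suggested mitigation).
        WEAKNESS_RULES = [
            (re.compile(r"\bstrcpy\s*\("), "CWE-120", "use a bounded copy such as strncpy/strlcpy"),
            (re.compile(r"\bgets\s*\("),   "CWE-242", "replace gets with fgets"),
        ]

        def find_weak_satd(c_source: str, context_lines: int = 5):
            """Return (line_no, cwe_id, mitigation) for weak code near SATD comments."""
            lines = c_source.splitlines()
            findings = []
            for i, line in enumerate(lines):
                if not SATD_PATTERN.search(line):
                    continue
                # Approximate the code block the comment refers to by the next few lines.
                block = lines[i + 1 : i + 1 + context_lines]
                for line_no, code in enumerate(block, start=i + 2):
                    for rule, cwe, fix in WEAKNESS_RULES:
                        if rule.search(code):
                            findings.append((line_no, cwe, fix))
            return findings

        example = """
        // TODO: sanitize input properly
        void copy(char *dst, const char *src) {
            strcpy(dst, src);
        }
        """
        print(find_weak_satd(example))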

    Automated test-based learning and verification of performance models for microservices systems

    Effective and automated verification techniques able to provide assurances of performance and scalability are in high demand in the context of microservices systems. In this paper, we introduce a methodology that applies specification-driven load testing to learn the behavior of the target microservices system under multiple deployment configurations. Testing is driven by realistic workload conditions sampled in production. The sampling produces a formal description of the users' behavior as a Discrete Time Markov Chain. This model drives multiple load-testing sessions that query the system under test and feed a Bayesian inference process, which incrementally refines the initial model to obtain, from run-time evidence, a complete specification as a Continuous Time Markov Chain. The complete specification is then used to conduct automated verification through probabilistic model checking and to compute a configuration score that evaluates alternative deployment options. This paper introduces the methodology, its theoretical foundation, and the toolchain we developed to automate it. Our empirical evaluation shows its applicability, benefits, and costs on a representative microservices system benchmark. We show that the methodology detects performance issues, traces them back to system-level requirements, and, thanks to the configuration score, provides engineers with insights on deployment options. The comparison between our approach and a selected state-of-the-art baseline shows that we are able to reduce the cost by up to 73% in terms of the number of tests. The verification stage requires negligible execution time and memory consumption: the verification of 360 system-level requirements took ~1 minute while consuming at most 34 KB, and the computation of the score involved the verification of ~7k (automatically generated) properties in ~72 seconds using at most ~50 KB. © 2022 The Author(s). Published by Elsevier Inc.
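
    The toolchain itself is not reproduced here; as a minimal sketch of one ingredient, the snippet below assumes that the users' behavior is encoded as a Discrete Time Markov Chain over hypothetical service operations and samples a user session from it, the kind of trace that would drive one load-testing request sequence against the system under test. The states, probabilities, and sample_session helper are illustrative assumptions.

        import random

        # Hypothetical DTMC of user behavior: states are operations of the microservices system,
        # "exit" is absorbing and ends the session.
        TRANSITIONS = {
            "login":    {"browse": 0.9, "exit": 0.1},
            "browse":   {"browse": 0.5, "checkout": 0.3, "exit": 0.2},
            "checkout": {"browse": 0.2, "exit": 0.8},
        }

        def sample_session(start="login", max_steps=100):
            """Sample one user session (a sequence of requests) from the DTMC."""
            state, trace = start, [start]
            for _ in range(max_steps):
                if state == "exit":
                    break
                nxt = TRANSITIONS[state]
                state = random.choices(list(nxt), weights=list(nxt.values()))[0]
                trace.append(state)
            return trace

        # Each sampled trace would be replayed as one load-testing session against the system under test.
        print(sample_session())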

    Taming Model Uncertainty in Self-adaptive Systems Using Bayesian Model Averaging

    Research on uncertainty quantification and mitigation in software-intensive systems and (self-)adaptive systems is gaining momentum, especially with the availability of statistical inference techniques (such as Bayesian reasoning) that make it possible to mitigate uncertain (quality) attributes of the system under scrutiny, often encoded in the system model in terms of model parameters. However, to the best of our knowledge, the uncertainty about the choice of a specific system model has not received the attention it deserves. This paper focuses on self-adaptive systems and investigates how to mitigate the uncertainty related to the model selection process, that is, whenever one model is chosen over plausible alternative and competing models to represent the understanding of a system and make predictions about future observations. In particular, we propose to enhance the classical feedback loop of a self-adaptive system with the ability to tame model uncertainty using Bayesian Model Averaging. This method improves the predictions made by the analyze component as well as the plan component, which adopts a metaheuristic optimizing search to guide the adaptation decisions. Our empirical evaluation demonstrates the cost-effectiveness of our approach using an exemplar case study in the robotics domain.
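
    As a minimal sketch of the core idea (not the authors' implementation), the snippet below combines the predictions of several plausible system models by weighting them with their posterior model probabilities, which is the essence of Bayesian Model Averaging; the candidate models, their log-likelihoods, and all numbers are hypothetical.

        import math

        def bma_posterior_weights(log_likelihoods, priors):
            """Posterior model probabilities p(M_k | data) from log-likelihoods and model priors."""
            logs = [ll + math.log(p) for ll, p in zip(log_likelihoods, priors)]
            m = max(logs)                                # log-sum-exp for numerical stability
            z = sum(math.exp(v - m) for v in logs)
            return [math.exp(v - m) / z for v in logs]

        def bma_prediction(predictions, weights):
            """BMA point prediction: posterior-weighted average of the models' predictions."""
            return sum(w * p for w, p in zip(weights, predictions))

        # Hypothetical example: three competing models of a quality attribute (e.g., response time in ms).
        log_likelihoods = [-12.1, -10.4, -15.0]          # fit of each model to run-time observations
        priors = [1 / 3, 1 / 3, 1 / 3]                   # uniform prior over the model set
        predictions = [240.0, 210.0, 300.0]              # each model's predicted response time

        w = bma_posterior_weights(log_likelihoods, priors)
        print(w, bma_prediction(predictions, w))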

    Towards Bathymetry-Optimized Doppler Re-Navigation for AUVs

    This paper describes a terrain-aided re-navigation algorithm for autonomous underwater vehicles (AUVs) built around optimizing bottom-lock Doppler velocity log (DVL) tracklines relative to a ship-derived bathymetric map. The goal of this work is to improve the precision of AUV DVL-based navigation for near-seafloor science by removing the low-frequency “drift” associated with a dead-reckoned (DR) Doppler navigation methodology. To do this, we use the discrepancy between vehicle-derived and ship-derived acoustic bathymetry as a corrective error measure in a standard nonlinear optimization framework. The advantage of this re-navigation methodology is that it exploits existing ship-derived bathymetric maps to improve vehicle navigation without requiring additional infrastructure. We demonstrate our technique on a recent AUV survey of large-scale gas blowout features located along the U.S. Atlantic margin.
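
    The paper's optimization framework is not reproduced here; as a rough sketch of the underlying idea, the code below estimates a constant horizontal offset for a dead-reckoned trackline by minimizing the squared discrepancy between vehicle-measured seafloor depths and a ship-derived bathymetric surface. The synthetic map, the constant-offset error model, and the use of a generic optimizer are simplifying assumptions; the paper formulates the correction within a more general nonlinear optimization framework.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical ship-derived bathymetry: depth (m) as a smooth function of (north, east) in meters.
        def ship_depth(n, e):
            return 1000.0 + 5.0 * np.sin(n / 50.0) + 3.0 * np.cos(e / 80.0)

        # Synthetic dead-reckoned trackline and vehicle-measured depths, generated with a known
        # (20 m, -15 m) drift so the optimizer has something to recover.
        track = np.column_stack([np.linspace(0, 500, 60), np.linspace(0, 300, 60)])
        true_offset = np.array([20.0, -15.0])
        measured_depth = ship_depth(track[:, 0] + true_offset[0], track[:, 1] + true_offset[1])

        def cost(offset):
            """Sum of squared depth discrepancies for a candidate (north, east) correction."""
            n = track[:, 0] + offset[0]
            e = track[:, 1] + offset[1]
            return np.sum((ship_depth(n, e) - measured_depth) ** 2)

        result = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
        print("estimated offset:", result.x)   # should land close to [20, -15]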

    ASTEF: A Simple Tool for Examining Fixation

    In human factors and ergonomics research, the analysis of eye movements has gained popularity as a method for obtaining information concerning the operator's cognitive strategies and for drawing inferences about the cognitive state of an individual. For example, recent studies have shown that the distribution of eye fixations is sensitive to variations in mental workload: dispersed when workload is high, and clustered when workload is low. Spatial statistics algorithms can be used to obtain information about the type of distribution and can be applied to fixations recorded during small epochs of time to assess online changes in the level of mental load experienced by an individual. In order to ease the computation of the statistical index and to encourage research on the spatial properties of visual scanning, A Simple Tool for Examining Fixations (ASTEF) has been developed. The software application implements functions for fixation visualization, management, and analysis, and includes a tool for fixation identification from raw gaze-point data. Updated information can be obtained online at www.astef.info, where the installation package is freely downloadable.
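
    The exact spatial statistic implemented by ASTEF is not given in the abstract; as an illustration of the kind of index it refers to, the sketch below computes a nearest-neighbour index over a set of fixation coordinates, where values well below 1 indicate a clustered pattern and values above 1 a dispersed one. The fixation data and the bounding-box estimate of the viewing area are illustrative assumptions.

        import math

        def nearest_neighbour_index(fixations):
            """NNI = mean observed nearest-neighbour distance / expected distance under a random pattern."""
            n = len(fixations)
            # Mean distance from each fixation to its closest neighbour.
            observed = sum(
                min(math.dist(p, q) for q in fixations if q is not p)
                for p in fixations
            ) / n
            # Expected mean nearest-neighbour distance for a random pattern: 0.5 * sqrt(area / n).
            xs, ys = zip(*fixations)
            area = (max(xs) - min(xs)) * (max(ys) - min(ys))
            expected = 0.5 * math.sqrt(area / n)
            return observed / expected

        # Hypothetical fixation coordinates (pixels) from one epoch of gaze data.
        fixations = [(100, 100), (102, 98), (99, 103), (101, 101), (300, 300), (302, 299)]
        print(nearest_neighbour_index(fixations))   # < 1 suggests clustering (low workload, per the abstract)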

    Coping with the State Explosion Problem in Formal Methods: Advanced Abstraction Techniques and Big Data Approaches.

    Formal verification of dynamic, concurrent, and real-time systems has been the focus of several decades of software engineering research. Formal verification requires high-performance data-processing software for extracting knowledge from the unprecedented amount of data containing all reachable states and all transitions that systems can make among those states, for instance, the extraction of specific reachable states, traces, and more. One of the most challenging tasks in this context is the development of tools able to cope with the complexity of real-world model analysis. Many methods have been proposed to alleviate this problem. For instance, advanced state space techniques aim at reducing the data that needs to be constructed in order to verify certain properties. Other directions are the efficient implementation of such analysis techniques and the study of ways to parallelize the algorithms in order to exploit multi-core and distributed architectures. Since cloud-based computing resources have become easily accessible, there is an opportunity for verification techniques and tools to undergo a deep technological transition to exploit the newly available architectures. This has created an increasing interest in parallelizing and distributing verification techniques. Cloud computing is an emerging and evolving paradigm whose challenges and opportunities allow for new research directions and applications. There is evidence that this trend will continue; in fact, several companies are putting remarkable effort into delivering services able to offer hundreds, or even thousands, of commodity computers to customers, thus enabling users to run massively parallel jobs. This revolution has already started in different scientific fields, achieving remarkable breakthroughs through new kinds of experiments that would have been impossible only a few years ago. However, despite many years of work in the area of multi-core and distributed model checking, few works introduce algorithms that can scale effortlessly to the use of thousands of loosely connected computers in a network, so existing technology does not yet allow us to take full advantage of the vast compute power of a "cloud" environment. Moreover, although model checking tools are so-called "push-button" tools, managing the high-performance computing environment required by distributed scientific applications is far from push-button, especially when one wants to exploit general-purpose cloud computing facilities. The thesis focuses on two complementary approaches to deal with the state explosion problem in formal verification. On the one hand, we try to decrease the exploration space by studying advanced state space methods for real-time systems modeled with Time Basic Petri nets; in particular, we address and solve several open problems for this modeling formalism. On the other hand, we try to increase the available computational power by introducing approaches, techniques, and software tools that allow us to leverage the "big data" trend to some extent. In particular, we provide frameworks and software tools that can be easily specialized to deal with the construction and verification of very large state spaces for different kinds of formalisms by exploiting big data approaches and cloud computing infrastructures.
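
    The thesis' frameworks and tools are not reproduced here; purely as a small-scale reminder of where the state explosion problem comes from, the sketch below builds a reachable state space by breadth-first exploration of an abstract successor function. The toy model (two bounded counters, whose state space grows quadratically with the bound) is an illustrative assumption and has nothing to do with the Time Basic Petri net formalism studied in the thesis.

        from collections import deque

        def explore(initial, successors):
            """Breadth-first construction of the reachable state space (states and transitions)."""
            visited = {initial}
            transitions = []
            frontier = deque([initial])
            while frontier:
                state = frontier.popleft()
                for nxt in successors(state):
                    transitions.append((state, nxt))
                    if nxt not in visited:
                        visited.add(nxt)
                        frontier.append(nxt)
            return visited, transitions

        # Toy model: two counters that can be incremented independently up to a bound.
        # The reachable set grows as (BOUND + 1) ** 2, a small-scale picture of state explosion.
        BOUND = 50

        def successors(state):
            x, y = state
            succ = []
            if x < BOUND:
                succ.append((x + 1, y))
            if y < BOUND:
                succ.append((x, y + 1))
            return succ

        states, transitions = explore((0, 0), successors)
        print(len(states), "states,", len(transitions), "transitions")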

    The BAR Domain Superfamily: Membrane-Molding Macromolecules

    Membrane-shaping proteins of the BAR domain superfamily are determinants of organelle biogenesis, membrane trafficking, cell division, and cell migration. An upsurge of research now reveals new principles of BAR domain-mediated membrane remodeling, enhancing our understanding of membrane curvature-mediated information processing.

    Online Model-Based Testing under Uncertainty

    Modern software systems are required to operate in a highly uncertain and changing environment. They have to control the satisfaction of their requirements at run-time, and possibly adapt and cope with situations that have not been completely addressed at design-time. Software engineering methods and techniques are, more than ever, forced to deal with change and uncertainty (lack of knowledge) explicitly. To tackle the challenge that uncertainty poses to delivering more reliable systems, this paper proposes a novel online Model-based Testing technique that complements classic test case generation based on pseudo-random sampling strategies with an uncertainty-aware sampling strategy. To deal with system uncertainty during testing, the proposed strategy builds on an Inverse Uncertainty Quantification approach driven by the discrepancy between the data measured at run-time (while the system executes) and a Markov Decision Process model describing the behavior of the system under test. To this end, a conformance game approach is adopted in which tests feed a Bayesian inference calibrator that continuously learns from test data to tune the system model and the system itself. A comparative evaluation between the proposed uncertainty-aware sampling policy and classical pseudo-random sampling policies is also presented using the Tele Assistance System running example, showing the differences in achieved accuracy and efficiency.
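
    The paper's technique is not reproduced here; the snippet below is only a minimal sketch of the flavour of approach described, assuming that the transition probabilities of the system model carry Dirichlet (here, two-outcome Beta) priors that are updated from observed test outcomes, and that the next test targets the state-action pair whose posterior is currently most uncertain. The toy model, the variance-based uncertainty measure, and the choose_next_action helper are hypothetical.

        # Bayesian calibration of a small MDP-like model from test observations.
        # counts[(state, action)] maps each successor outcome to prior + observed counts.
        counts = {
            ("idle", "request"): {"ok": 1, "fail": 1},     # Beta(1, 1) prior on P(ok)
            ("idle", "timeout"): {"ok": 1, "fail": 1},
        }

        def update(state, action, observed_next):
            """Bayesian update: increment the count of the observed successor outcome."""
            counts[(state, action)][observed_next] += 1

        def posterior_mean(state, action):
            c = counts[(state, action)]
            total = sum(c.values())
            return {s: v / total for s, v in c.items()}

        def posterior_variance(state, action):
            """Variance of the 'ok' probability under the Beta posterior (two-outcome case)."""
            c = counts[(state, action)]
            a, b = c["ok"], c["fail"]
            return a * b / ((a + b) ** 2 * (a + b + 1))

        def choose_next_action():
            """Uncertainty-aware sampling: test the state-action pair we know least about."""
            return max(counts, key=lambda sa: posterior_variance(*sa))

        # One step of the online loop: pick the most uncertain input, run a test, update the model.
        state, action = choose_next_action()
        update(state, action, "ok")            # pretend the test observed a successful transition
        print(posterior_mean(state, action))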

    Reserves forecasting for open market operations

    Bank reserves; Open market operations

    The value function of an asymptotic exit-time optimal control problem

    We consider a class of exit-time control problems for nonlinear systems with a nonnegative vanishing Lagrangian. In general, the associated PDE may have multiple solutions, and known regularity and stability properties do not hold. In this paper, we obtain such properties and a uniqueness result under some explicit sufficient conditions. We also briefly investigate the infinite horizon problem.
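
    For readers unfamiliar with the setting, a standard formulation of the value function of an exit-time problem with target set T reads as follows (the notation is ours; the paper's precise assumptions on the Lagrangian and on the target are stated there). The PDE mentioned in the abstract is the associated Hamilton-Jacobi-Bellman equation.

        % Controlled dynamics y'(t) = f(y(t), a(t)), y(0) = x, with nonnegative running cost l,
        % and first hitting time of the target set T:
        \tau_x(a) = \inf\{\, t \ge 0 : y_x(t; a) \in T \,\}
        % Value function of the exit-time problem:
        V(x) = \inf_{a(\cdot)} \int_0^{\tau_x(a)} l\bigl(y_x(t; a),\, a(t)\bigr)\, dt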